5 research outputs found

    Reading between the lines of code: visualising a program’s lifetime

    Visual representations of systems or processes are rife in all fields of science and engineering because of the concise yet expressive descriptions they convey. Humans’ pervasive tendency to visualise has led to a variety of methods evolving over the years to represent different aspects of software. Visualising running software, however, has been fraught with the challenge of providing a meaningful representation of a process that is stripped of visual cues and reduced to the manipulation of values, and the field has consequently evolved very slowly. Visualising running software is particularly useful for analysing a program’s behaviour (e.g. software written to make use of late binding) and for assessing the ever-important question of how well the final product fulfils the initial request. This paper discusses the significance of gaining improved insight into a program’s lifetime and demonstrates how attributing a geometric sense to the design of computer languages can make it easier to visualise the execution of software by shifting the focus of semantics towards the spatial organisation of program parts.
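    The paper’s specific contribution, a geometric treatment of language semantics, is not reproduced here. Purely as a hypothetical illustration of what capturing a program’s lifetime for visualisation can involve, the Python sketch below records the call and return events of a running program and renders them as an indented textual timeline; every name in it is invented for this example and none of it is the paper’s method.

        import sys
        import time

        events = []   # (depth, event kind, function name, elapsed seconds)
        start = time.perf_counter()
        depth = 0

        def tracer(frame, event, arg):
            """Record call/return events; ignore line/exception events."""
            global depth
            if event == "call":
                depth += 1
                events.append((depth, "call", frame.f_code.co_name,
                               time.perf_counter() - start))
            elif event == "return":
                events.append((depth, "return", frame.f_code.co_name,
                               time.perf_counter() - start))
                depth -= 1
            return tracer   # keep tracing nested frames

        def fib(n):
            return n if n < 2 else fib(n - 1) + fib(n - 2)

        sys.settrace(tracer)
        fib(4)
        sys.settrace(None)

        # Render the recorded lifetime as an indented timeline.
        for d, kind, name, t in events:
            print(f"{t * 1e6:9.1f} us {'  ' * d}{kind:<6} {name}")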

    EMU: Rapid prototyping of networking services

    Owing to their performance and flexibility, FPGAs are an attractive platform for executing network functions. It has long been a challenge, however, to make FPGA programming accessible to a large audience of developers. An appealing solution is to compile code from a general-purpose language to hardware using high-level synthesis. Unfortunately, current approaches to implementing rich network functionality are insufficient because they lack: (i) libraries with abstractions for common network operations and data structures, (ii) bindings to the underlying “substrate” on the FPGA, and (iii) debugging and profiling support. This paper describes Emu, a new standard library for an FPGA hardware compiler that enables developers to rapidly create and deploy network functionality. Emu allows for high-performance designs without being bound to particular packet-processing paradigms. Furthermore, it supports running the same programs on CPUs, in Mininet, and on FPGAs, providing a better development environment that includes advanced debugging capabilities. We demonstrate that network functions implemented using Emu have only negligible resource and performance overheads compared with natively written hardware versions.
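    The abstract does not show Emu’s actual API, so the Python sketch below is only a hedged, language-neutral illustration of the kind of packet-processing function such a library targets: the logic is written as a pure function over a packet so that the same code could run in a software test harness or, in principle, be compiled to hardware. The types and names are hypothetical.

        from dataclasses import dataclass
        from typing import Optional

        # Hypothetical header model; Emu's real abstractions differ.
        @dataclass
        class UdpPacket:
            src_ip: int
            dst_ip: int
            src_port: int
            dst_port: int
            payload: bytes

        BLOCKED_PORTS = {23, 2323}  # e.g. drop telnet-style traffic

        def port_filter(pkt: UdpPacket) -> Optional[UdpPacket]:
            """Drop packets to blocked ports, forward everything else."""
            if pkt.dst_port in BLOCKED_PORTS:
                return None  # drop
            return pkt       # forward unchanged

        # Software harness standing in for CPU/Mininet testing.
        assert port_filter(UdpPacket(1, 2, 5000, 53, b"dns")) is not None
        assert port_filter(UdpPacket(1, 2, 5000, 23, b"telnet")) is None
        print("port_filter behaves as expected")

    The point of this shape is paradigm neutrality: the function commits to no particular packet-processing pipeline and simply maps a packet to a forwarding decision.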

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents covering a variety of research fields, against which newly developed literature-search techniques could be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH (RELISH) consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performance. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to capture all relevant articles completely. The established database server, located at https://relishdb.ict.griffith.edu.au, is freely available for downloading annotation data and for the blind testing of new methods. We expect this benchmark to be useful for stimulating the development of powerful new techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
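    Of the three baselines, Okapi Best Matching 25 (BM25) is the simplest to illustrate. The Python sketch below is a minimal BM25 scorer with whitespace tokenisation and the common default parameters k1 = 1.5 and b = 0.75; it is a toy illustration, not the consortium’s evaluation code.

        import math
        from collections import Counter

        def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
            """Score each document (a list of tokens) against the query."""
            N = len(docs)
            avgdl = sum(len(d) for d in docs) / N
            df = Counter(t for d in docs for t in set(d))  # document frequency
            scores = []
            for d in docs:
                tf = Counter(d)
                s = 0.0
                for q in query_terms:
                    n = df[q]
                    idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
                    denom = tf[q] + k1 * (1 - b + b * len(d) / avgdl)
                    s += idf * tf[q] * (k1 + 1) / denom
                scores.append(s)
            return scores

        docs = [
            "autophagy guidelines for monitoring assays".split(),
            "document similarity in biomedical literature search".split(),
            "fpga network functions prototyping".split(),
        ]
        query = "biomedical literature similarity".split()
        print(bm25_scores(query, docs))  # the second document scores highest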

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. It is therefore important to formulate, on a regular basis, updated guidelines for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways, including apoptosis, not all of them can be used as specific markers for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.